Detecting and Predicting Changes
Abstract
When required to predict sequential events, such as random coin tosses or basketball free throws, people reliably use inappropriate strategies, such as inferring temporal structure when none is present. We investigate the ability of human observers to predict sequential events in dynamically changing environments, where there is an opportunity to detect true temporal structure. In two experiments we demonstrate that participants often make correct statistical decisions when asked to infer the hidden state of the data-generating process. In contrast, when asked to make predictions about future outcomes, accuracy decreased, even though the normatively correct responses in the two tasks were identical. A particle filter model accounts for all data, describing performance in terms of a plausible psychological process. By varying the number of particles, and the prior belief about the probability of a change occurring in the data-generating process, we were able to model most of the observed individual differences.

Many real-world environments involve complex changes over time, where behavior that was previously adaptive becomes maladaptive. These dynamic environments require a rapid response to change. For example, stock analysts need to quickly detect changes in the market in order to adjust investment strategies, and coaches need to track changes in a player's performance in order to adjust team strategy. Reliable change detection requires accurate interpretation of sequential dependencies in the observed data.

Research on decision making in probabilistic environments has often called into question our ability to correctly interpret sequential events. When required to predict random events, such as coin tosses, people reliably use inappropriate strategies, such as the famous "Gambler's Fallacy" identified by Kahneman and Tversky (1973, 1979): if a random coin is tossed several times, people often believe that a tail becomes more likely after a long run of heads (see also Burns & Corpus, 2004; Sundali & Croson, 2006). The error of reasoning that underlies the Gambler's Fallacy is the perception of probabilistic regularities in a sequence where no such regularities are present, because the sequence is in fact truly random. Such perceptions might arise because real-world environments rarely produce truly random sequences; often there really is statistical information in the sequence of events. The Gambler's Fallacy could therefore simply be the result of people projecting their experience of real-world environments onto laboratory tasks.

Related work in dynamical systems research using response time tasks paints a complementary picture. When the optimal strategy in a task is to provide a series of independent and identically distributed responses, people often perform sub-optimally. Long-range autocorrelations have been observed, where responses depend on earlier responses that occurred quite a long time previously (e.g., Gilden, 2001; Van Orden, Holden, & Turvey, 2003, 2005; Thornton & Gilden, 2005), although not all authors agree on the meaning of these phenomena (e.g., Farrell, Wagenmakers, & Ratcliff, 2006; Wagenmakers, Farrell, & Ratcliff, 2004, 2005). The same criticism applies to dynamical systems research as to the Gambler's Fallacy: tasks requiring long sequences of stationary and conditionally random responses have questionable ecological validity.
Even in real-world environments, people often perceive statistical regularities where no such regularities might be present (e.g., Albright, 1993; Gilovich, Vallone, & Tversky, 1985). For example, when a basketball player makes several successful shots in a row, observers readily conclude that the player's underlying skill level has temporarily increased: that the player has a "hot hand". Observers draw these conclusions even when the data are more consistent with random fluctuations than with underlying changes in skill level. A problem with the hot hand phenomenon is the statistical interpretation of the results. The uncontrolled nature of batting averages and basketball shots makes the true state of the underlying process impossible to know. Even after detailed statistical analyses of data from many games, statisticians are still unsure whether a "hot hand" phenomenon actually exists in the data (Adams, 1992; Kass & Raftery, 1995; Larkey, Smith, & Kadane, 1989; Chatterjee, Yilmaz, Habibullah, & Laudato, 2000). This confusion makes it difficult to draw meaningful conclusions about the rationality of people's judgments.

We investigate the ability of observers to track changes in dynamic environments. In contrast to research on the hot hand phenomenon, we use controlled dynamic environments where we know exactly how the statistical sequence was produced and at what time points the changes occurred. This knowledge allows us to improve on prior hot hand research because we know the true (but hidden) state of the world, and we can assess observers' rationality against this benchmark.

The dynamic task environment also allows us to advance on prior Gambler's Fallacy research. In a typical Gambler's Fallacy experiment, observers are asked for predictions about a string of independent observations. The optimal response is to make predictions that are also sequentially independent, leaving just one way for observers to display sub-optimality: by inducing greater-than-zero sequential dependence in their predictions. With this constraint, any variability between observers forces sub-optimality to be observed in the average. In contrast, our dynamic environment includes changes in the hidden state on some trials but not others, which induces some sequential dependence between observations. This allows sub-optimality to arise in two distinct ways: observers can believe there are either more or fewer sequential dependencies than actually exist. When observers detect more change points than really exist, they may behave sub-optimally because they react to perceived changes in the underlying environment that are not real (e.g., a basketball coach who is prone to seeing hot hands where none are present). Conversely, when observers detect too few change points, they may fail to adapt to short-lived changes in the environment. This tradeoff between detecting too few and too many change points has often been ignored in previous studies of change detection, which mostly assumed an offline setting where the task is to identify change points in a complete set of data observed earlier (see, e.g., Chinnis & Peterson, 1968, 1970; Massey & Wu, 2005; Robinson, 1964). However, real-world examples are invariably online: data arrive sequentially, and a detection response is required as soon as possible after a change point passes, before all the data have been observed.
Online change detection is also important in clinical settings, for instance in identifying dorsolateral frontal lobe damage. For example, the widely used Wisconsin Card Sorting Task (Berg, 1948) screens patients according to how often they make perseverative errors, that is, how often they fail to detect a change in the task environment and continue to apply an outdated and suboptimal strategy. Animal researchers have studied similar behavior in rats (e.g., Gallistel, Mark, King, & Latham, 2001). Rats take some time to detect and adjust to unsignaled changes in reinforcement schedules, but eventually return to probability matching behavior.

We develop a particle filter model as a psychologically plausible model of online change detection. Particle filters are Monte Carlo techniques for estimating the hidden state of a dynamically evolving system (see Doucet, de Freitas, & Gordon, 2001), and they have recently been proposed as a general class of psychological models (Sanborn, Griffiths, & Navarro, 2006; Daw & Courville, 2007). They are attractive as models of human decision making because they require quite simple calculations and do not require a long memory for past observations. Even with these limitations, particle filter models approach the statistically optimal treatment of the data when endowed with a sufficiently large number of particles: the distribution of particles approaches the full Bayesian posterior distribution, conditional on all prior observations. In addition, by limiting the number of particles, the particle filter is able to mimic suboptimal behavior observed in various tasks such as categorization (e.g., Sanborn et al., 2006). Most interestingly, particle filters allow us to investigate what may be termed "conditionally optimal" models, in which statistically optimal algorithms are applied to incorrect initial assumptions. Such models help us to address the question of exactly where the suboptimality in human information processing arises.

Using a simple change detection environment, we gather data from two experiments and compare participants' performance to predictions from a particle filter model. The data reveal interesting trends, including the tradeoff between detecting too many and too few change points, and the model analyses shed light on how this tradeoff may operate. The model analyses also reveal that many participants, particularly in Experiment 1, behave in a "conditionally optimal" manner: the data appear as if these participants apply rational statistical processes, but use incorrect estimates of certain environmental parameters. The close-to-optimal behavior of our participants in Experiment 1 stands in contrast to prior research on the fallibility of human decision making. We hypothesized that people have difficulties when they are asked for predictions about the future (as in gambling research, for example). Experiment 2 investigates the differences between predictions about the future and inferences about the past state of the environment. We observed a surprisingly large difference between these two kinds of responses, and we use the particle filter model to provide a natural explanation for this difference.
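To make the particle filter account concrete, the sketch below implements one update cycle of a basic bootstrap particle filter for a change-point environment of the kind used here. It is a minimal illustration, not the authors' implementation: the parameter values (the four means, the standard deviation, the per-trial change probability) and all function names are assumptions introduced for this example.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's exact values)
MEANS = np.array([0.2, 0.4, 0.6, 0.8])  # means of the four distributions A-D
SIGMA = 0.1                             # shared standard deviation
CHANGE_PROB = 0.1                       # assumed prior probability of a change per trial

def particle_filter_step(particles, observation, rng):
    """One bootstrap-filter update: propagate, weight, resample.

    `particles` holds one hypothesized hidden state (0-3) per particle;
    only this current set is stored, so no long memory of past data is needed.
    """
    n = len(particles)
    # 1. Propagate through the transition prior: with probability
    #    CHANGE_PROB, a particle jumps to a randomly chosen state.
    jump = rng.random(n) < CHANGE_PROB
    particles = np.where(jump, rng.integers(len(MEANS), size=n), particles)
    # 2. Weight each particle by the Gaussian likelihood of the new
    #    observation under its hypothesized generating distribution.
    weights = np.exp(-0.5 * ((observation - MEANS[particles]) / SIGMA) ** 2)
    weights /= weights.sum()
    # 3. Resample in proportion to the weights, so particles consistent
    #    with the data survive and the rest die out.
    return rng.choice(particles, size=n, p=weights)

rng = np.random.default_rng(1)
particles = rng.integers(len(MEANS), size=50)  # 50 particles, random start
particles = particle_filter_step(particles, observation=0.62, rng=rng)
# The modal particle state is the filter's inference about the hidden state.
```

With many particles, the resampled set approximates the Bayesian posterior over the hidden state; with few particles, or a mistaken value of the change probability, the approximation degrades in ways that can mimic the suboptimal behavior discussed above.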
Change Detection Environments

In our experiments, random numbers are presented to an observer, one number at a time. After each new value is presented, the observer is required to respond, either with an inference about the mean value of the process that is currently generating data, or with a prediction for the next value. Figure 1 illustrates the particular set of distributions we used in our experiments, along with some example stimuli and two sets of example responses. Each stimulus was sampled from one of four normal distributions, all with the same standard deviation but with different means, shown by the four curves labeled "A" through "D" in the upper right corner of Figure 1. The 16 crosses below these distributions show 16 example stimuli, and the labels of the distributions from which they arose are shown just to the right of the stimuli; we call these the hidden states because they are unobservable to the participants in our experiments. The five uppermost stimuli were generated from distribution A. Most of these fall close to the mean of distribution A, but there are random fluctuations; for example, the third stimulus is close to the mean of distribution B. After the first five stimuli, the hidden state switches, and three new stimuli are produced from distribution B. The process continues with six stimuli then produced from distribution D, and finally two from distribution A again.

[Figure 1: the four generating distributions A-D, sixteen example stimuli plotted by trial, and example responses, including inferences.]
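For readers who want to reproduce an environment of this kind, the following sketch simulates a sequence like the one in Figure 1: a hidden state that occasionally switches between four normal distributions, emitting one observable value per trial. The means, standard deviation, and change probability are placeholders chosen for illustration, not the experiments' actual values.

```python
import numpy as np

def generate_sequence(n_trials, means, sigma, change_prob, rng):
    """Simulate a Figure 1-style environment: a hidden state that
    occasionally switches distribution, emitting one noisy value per trial."""
    states = np.empty(n_trials, dtype=int)
    states[0] = rng.integers(len(means))
    for t in range(1, n_trials):
        if rng.random() < change_prob:
            # A change point: pick a new generating distribution at random
            # (for simplicity this may re-pick the current one).
            states[t] = rng.integers(len(means))
        else:
            states[t] = states[t - 1]
    stimuli = rng.normal(means[states], sigma)  # the observable values
    return stimuli, states

rng = np.random.default_rng(0)
means = np.array([0.2, 0.4, 0.6, 0.8])  # distributions A-D (values assumed)
stimuli, states = generate_sequence(16, means, sigma=0.1, change_prob=0.15, rng=rng)
```

Because the simulation records `states` alongside `stimuli`, the true change points are known exactly, which is precisely the property that lets observers' inferences and predictions be scored against a ground-truth benchmark.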